In [1]:
%autosave 10
scipy.optimize: mathematical optimisation, i.e. minimising functions f(x) of one or more variables. See also statsmodels for statistical models. If the function is completely random, brute force (evaluating f over a grid and taking np.argmin) is the best you can do, because there is no structure to exploit.
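As a sketch of what brute force means here (the function and grid are illustrative, not from the notes):

```python
import numpy as np

# Evaluate f on a dense grid and take the argmin: works on any function,
# but the grid size explodes with the number of variables.
f = lambda x: (x - 4)**2
grid = np.linspace(-10, 10, 10001)
x_best = grid[np.argmin(f(grid))]   # best grid point, here close to 4
```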
Completely linear or quadratic functions are the opposite extreme: their structure lets us solve them analytically, without numerical methods, let alone brute force.
Real problems tend to be semi-structured, and often can't be expressed symbolically; maybe a symbolic representation doesn't even exist.
import numpy as np
import scipy.optimize

f = lambda x: np.exp((x - 4)**2)
scipy.optimize.minimize(f, 5)
But scipy.optimize.minimize is extremely customisable, with lots of methods, each with strengths and weaknesses. The default (for unconstrained problems) is BFGS; just change the 'method' keyword to pick another.
How to choose solver?
Characteristics of each method
Check the messages in the result! The solver might report that it isn't sure about the solution, e.g. that it failed to converge.
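For instance (a sketch reusing the function above; picking Nelder-Mead as the comparison solver is just for illustration):

```python
import numpy as np
import scipy.optimize

f = lambda x: np.exp((x[0] - 4)**2)

# Same problem, two solvers: switched only via the `method` keyword.
res_bfgs = scipy.optimize.minimize(f, [5.0], method="BFGS")
res_nm = scipy.optimize.minimize(f, [5.0], method="Nelder-Mead")

# Inspect the whole result object, not just res.x:
print(res_bfgs.success, res_bfgs.message)
```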
In [2]:
from pulp import *

# Decision variables, each bounded to [-10, 10]
x = LpVariable('x', -10, 10)
y = LpVariable('y', -10, 10)

# Linear programme: minimise 3x - y (no constraints beyond the bounds)
prob = LpProblem("Toy problem", LpMinimize)
prob += 3*x - y
prob.solve()
Out[2]:
In [3]:
prob
Out[3]:
In [4]:
(x.value(), y.value())
Out[4]:
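As a cross-check (not in the original notes), the same LP can be solved with scipy.optimize.linprog; with no constraints beyond the bounds, the minimiser pushes x to its lower bound and y to its upper bound:

```python
import scipy.optimize

# minimise 3x - y over -10 <= x, y <= 10  =>  cost vector c = [3, -1]
res = scipy.optimize.linprog(c=[3, -1], bounds=[(-10, 10), (-10, 10)])
# res.x should be (-10, 10), with objective value -40
```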
numdifftools: numerical approximation of derivatives (gradients, Hessians, etc.). scipy.optimize also has some finite-difference helpers.
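For example, scipy's own scipy.optimize.approx_fprime computes a forward-difference gradient (a sketch; the test point is arbitrary):

```python
import numpy as np
import scipy.optimize

f = lambda x: np.exp((x[0] - 4)**2)

# Finite-difference gradient at x = 5; the exact derivative there is 2*e
g = scipy.optimize.approx_fprime(np.array([5.0]), f, 1e-8)
```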
In [7]:
import sympy as S   # avoid `from math import *` here: math.sqrt/log can't handle sympy expressions
S.var("x mu sigma", real=True)
Out[7]:
In [8]:
f = 1/(S.sqrt(2*S.pi)*sigma)*S.exp(-(x - mu)**2/(2*sigma**2))
f
Out[8]:
In [9]:
ll = S.log(f)
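Assuming the log-likelihood is being built towards maximum-likelihood estimation (an assumption; the notes stop here), sympy can differentiate it symbolically, and the derivative with respect to mu collapses to a familiar form:

```python
import sympy as S

S.var("x mu sigma", real=True)
f = 1/(S.sqrt(2*S.pi)*sigma)*S.exp(-(x - mu)**2/(2*sigma**2))
ll = S.log(f)

# d(ll)/d(mu) simplifies to (x - mu)/sigma**2
dll_dmu = S.simplify(S.diff(ll, mu))
```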
In [ ]: